Bhattacharyya distance

In statistics, the Bhattacharyya distance measures the similarity of two discrete or continuous probability distributions. It is closely related to the Bhattacharyya coefficient, which is a measure of the amount of overlap between two statistical samples or populations. Both measures are named after Anil Kumar Bhattacharyya, a statistician who worked in the 1930s at the Indian Statistical Institute.

The coefficient can be used to determine the relative closeness of the two samples being considered. It is used to measure the separability of classes in classification, and it is considered more reliable than the Mahalanobis distance, because the Mahalanobis distance is the particular case of the Bhattacharyya distance that arises when the standard deviations of the two classes are the same. Consequently, when two classes have similar means but different standard deviations, the Mahalanobis distance tends to zero, whereas the Bhattacharyya distance grows with the difference between the standard deviations.
== Definition ==

For discrete probability distributions ''p'' and ''q'' over the same domain ''X'', the Bhattacharyya distance is defined as:
:D_B(p,q) = -\ln \left( BC(p,q) \right)
where:
:BC(p,q) = \sum_{x\in X} \sqrt{p(x)\, q(x)}
is the Bhattacharyya coefficient.
For continuous probability distributions, the Bhattacharyya coefficient is defined as:
:BC(p,q) = \int \sqrt{p(x)\, q(x)}\, dx
In either case, 0 \le BC \le 1 and 0 \le D_B \le \infty. D_B does not obey the triangle inequality, but the Hellinger distance \sqrt{1-BC(p,q)} does obey the triangle inequality.
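As an illustration (a minimal sketch, not part of the article), the discrete coefficient and distance can be computed directly from two probability mass functions; the dict-based representation and the helper name are assumptions made here:

```python
import math

def bhattacharyya(p, q):
    # Hypothetical helper: p and q are dicts mapping outcomes to probabilities.
    # BC(p, q) = sum over the common domain of sqrt(p(x) * q(x)).
    bc = sum(math.sqrt(p.get(x, 0.0) * q.get(x, 0.0)) for x in set(p) | set(q))
    # D_B = -ln(BC); BC = 0 (disjoint supports) gives an infinite distance.
    db = -math.log(bc) if bc > 0 else math.inf
    return bc, db

# Identical distributions: BC = 1 and D_B = 0, the extremes of the bounds above.
bc, db = bhattacharyya({"a": 0.5, "b": 0.5}, {"a": 0.5, "b": 0.5})
```

Note that `D_B` is unbounded above: two distributions with disjoint supports have `BC = 0` and infinite distance, consistent with `0 \le D_B \le \infty`.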
In its simplest formulation, the Bhattacharyya distance between two classes under the normal distribution can be calculated 〔Guy B. Coleman, Harry C. Andrews, "Image Segmentation by Clustering", ''Proc IEEE'', Vol. 67, No. 5, pp. 773–785, 1979〕 by extracting the mean and variances of two separate distributions or classes:
:D_B(p,q) = \frac{1}{4} \ln \left( \frac{1}{4} \left( \frac{\sigma_p^2}{\sigma_q^2}+\frac{\sigma_q^2}{\sigma_p^2}+2\right) \right) + \frac{1}{4} \left( \frac{(\mu_p-\mu_q)^2}{\sigma_p^2+\sigma_q^2}\right)
where:
:\sigma_p^2 and \mu_p are the variance and mean of distribution ''p'', and likewise for ''q''.
The Mahalanobis distance used in Fisher's linear discriminant analysis is a particular case of the Bhattacharyya distance. When the variances of the two distributions are the same, the first term of the distance is zero, as this term depends solely on the variances of the distributions (left case of the figure). The first term grows as the variances differ (right case of the figure). The second term, on the other hand, is zero if the means are equal; it grows with the squared difference of the means and is inversely proportional to the sum of the variances.
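The univariate normal formula above can be sketched directly in code (the function name is an assumption; the arithmetic follows the formula term by term):

```python
import math

def bhattacharyya_normal(mu1, var1, mu2, var2):
    # First term: depends only on the variances; zero when var1 == var2.
    term1 = 0.25 * math.log(0.25 * (var1 / var2 + var2 / var1 + 2.0))
    # Second term: depends on the squared mean difference over the variance sum.
    term2 = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    return term1 + term2
```

For equal variances the first term vanishes and only the Mahalanobis-like mean term remains; for equal means but unequal variances the distance is still positive, which is exactly the behaviour that distinguishes it from the Mahalanobis distance.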

For multivariate normal distributions p_i=\mathcal{N}(\boldsymbol\mu_i,\,\boldsymbol\Sigma_i),
: D_B=\frac{1}{8}(\boldsymbol\mu_1-\boldsymbol\mu_2)^T \boldsymbol\Sigma^{-1}(\boldsymbol\mu_1-\boldsymbol\mu_2)+\frac{1}{2}\ln \left(\frac{\det \boldsymbol\Sigma}{\sqrt{\det \boldsymbol\Sigma_1 \, \det \boldsymbol\Sigma_2}}\right),
where \boldsymbol\mu_i and \boldsymbol\Sigma_i are the means and covariances of the distributions, and
: \boldsymbol\Sigma=\frac{\boldsymbol\Sigma_1+\boldsymbol\Sigma_2}{2}.
Note that, in this case, the first term in the Bhattacharyya distance is related to the Mahalanobis distance.
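The multivariate formula can be sketched as follows (a minimal illustration, assuming NumPy and positive-definite covariances; the function name is hypothetical):

```python
import numpy as np

def bhattacharyya_mvn(mu1, cov1, mu2, cov2):
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov = 0.5 * (cov1 + cov2)          # pooled covariance Sigma
    diff = mu1 - mu2
    # Mahalanobis-related term: (1/8) diff^T Sigma^{-1} diff.
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    # Covariance term: (1/2) ln( det(Sigma) / sqrt(det(Sigma1) det(Sigma2)) ).
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2
```

When both covariances equal the identity, the second term vanishes and the distance reduces to one eighth of the squared Euclidean distance between the means, matching the Mahalanobis relation noted above.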


Excerpt source: Wikipedia, the free encyclopedia ("Bhattacharyya distance").